
    Randomized Signal Processing with Continuous Frames

    This paper focuses on signal processing tasks in which the signal is transformed from the signal space to a higher-dimensional phase space using a continuous frame, processed in this space, and synthesized to an output signal. For example, in a phase vocoder method, an audio signal is transformed to the time-frequency plane via the short-time Fourier transform, manipulated there, and synthesized to an output audio signal. We show how to approximate such methods, termed phase space signal processing methods, using a Monte Carlo method. The Monte Carlo method speeds up computations, since the number of samples required for a given accuracy is proportional to the dimension of the signal space, not to the dimension of phase space, which is typically higher. We exploit this property in a new phase vocoder method based on an enhanced time-frequency space with more dimensions than the classical method. The higher dimension of phase space improves the quality of the method, while retaining the computational complexity of a standard phase vocoder based on regular samples.
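    The key claim, that the sample count scales with the signal-space dimension rather than the phase-space dimension, can be illustrated with a toy discrete stand-in for the continuous frame. The sketch below uses a redundant harmonic tight frame (a hypothetical setup for illustration, not the paper's construction): synthesis is a sum over K phase-space terms, and a uniformly sampled Monte Carlo estimate of that sum reaches a given relative error with a number of samples S proportional to N, independently of K.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a continuous frame: a tight "harmonic" frame of K > N
# Fourier atoms, so phase space (dim K) is larger than signal space (dim N).
N, K = 32, 256
n = np.arange(N)
atoms = np.exp(2j * np.pi * np.outer(np.arange(K), n) / K) / np.sqrt(N)  # K x N

f = rng.standard_normal(N)           # test signal
c = atoms.conj() @ f                 # analysis: phase-space coefficients
exact = (N / K) * (atoms.T @ c)      # exact synthesis (tight-frame inverse)

# Monte Carlo synthesis: sample S phase-space points uniformly (with
# replacement) and rescale, giving an unbiased estimate of the full sum.
S = 3200                             # grows with N, not with K
idx = rng.integers(0, K, size=S)
approx = (N / S) * (atoms[idx].T @ c[idx])

rel_err = np.linalg.norm(approx - f) / np.linalg.norm(f)
print(f"relative error: {rel_err:.3f}")   # roughly sqrt(N/S), ~0.1 here
```

    Doubling K while keeping S fixed leaves the error essentially unchanged, which is the computational advantage the abstract describes.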

    CayleyNets: Graph Convolutional Neural Networks with Complex Rational Spectral Filters

    The rise of graph-structured data such as social networks, regulatory networks, citation graphs, and functional brain networks, combined with the resounding success of deep learning in various applications, has sparked interest in generalizing deep learning models to non-Euclidean domains. In this paper, we introduce a new spectral-domain convolutional architecture for deep learning on graphs. The core ingredient of our model is a new class of parametric rational complex functions (Cayley polynomials) that allow efficient computation of spectral filters on graphs that specialize on frequency bands of interest. Our model generates rich spectral filters that are localized in space, scales linearly with the size of the input data for sparsely connected graphs, and can handle different constructions of Laplacian operators. Extensive experimental results show the superior performance of our approach, in comparison to other spectral-domain convolutional architectures, on spectral image classification, community detection, vertex classification, and matrix completion tasks.
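    A minimal sketch of a Cayley filter, following the published definition g(λ) = c₀ + 2 Re Σⱼ cⱼ ((hλ − i)/(hλ + i))ʲ applied to a graph Laplacian. For clarity this uses direct linear solves (the CayleyNets paper instead uses cheap Jacobi iterations for sparse graphs); the graph, coefficients, and zoom parameter here are made-up illustration values.

```python
import numpy as np

rng = np.random.default_rng(1)

def cayley_filter(L, x, c, h):
    """Apply y = c0*x + 2*Re(sum_j c_j * C(hL)^j x) to a graph signal x,
    where C(A) = (A - iI)(A + iI)^{-1} is the Cayley transform of hL.
    c: coefficients [c0 (real), c1, ..., cr (complex)]; h: spectral zoom."""
    n = L.shape[0]
    A = h * L
    y = c[0].real * x
    v = x.astype(complex)
    for cj in c[1:]:
        # v <- C(hL) v, via the solve (hL + iI) v_new = (hL - iI) v
        v = np.linalg.solve(A + 1j * np.eye(n), (A - 1j * np.eye(n)) @ v)
        y = y + 2 * (cj * v).real
    return y

# Unnormalized Laplacian of a 6-node path graph (illustration only).
m = 6
Adj = np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1)
L = np.diag(Adj.sum(1)) - Adj

x = rng.standard_normal(m)
c = np.array([0.5, 0.3 - 0.2j, 0.1 + 0.05j])   # made-up filter coefficients
h = 1.0
print(cayley_filter(L, x, c, h))
```

    Because the filter is a rational function of hL, its output agrees with applying the scalar function g to the Laplacian's eigenvalues, which is what makes the filters concentrate on chosen frequency bands as h varies.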

    Business growth ambitions amongst SMEs – changes over time and links to growth

    This study is a follow-up to a report published in 2012 which examined the level and determinants of growth ambition amongst UK SMEs. The purpose of this research is to resurvey the respondents to the 2012 study in order to generate new data which, in combination with secondary data on business performance, will provide answers to the following key research questions:
    • How does ambition change over time, and what influences this?
    • What is the relationship between ambition and business performance?
    In addition to these major research aims, other study objectives were set, including examining the effect of growth ambition on productivity, employment growth, and turnover growth, and identifying the policy relevance of the findings.

    Approximately Equivariant Graph Networks

    Graph neural networks (GNNs) are commonly described as being permutation equivariant with respect to node relabeling in the graph. This symmetry of GNNs is often compared to the translation equivariance of Euclidean convolutional neural networks (CNNs). However, these two symmetries are fundamentally different: the translation equivariance of CNNs corresponds to symmetries of the fixed domain acting on the image signal (sometimes known as active symmetries), whereas in GNNs any permutation acts on both the graph signals and the graph domain (sometimes described as passive symmetries). In this work, we focus on the active symmetries of GNNs by considering a learning setting where signals are supported on a fixed graph. In this case, the natural symmetries of GNNs are the automorphisms of the graph. Since real-world graphs tend to be asymmetric, we relax the notion of symmetries by formalizing approximate symmetries via graph coarsening. We present a bias-variance formula that quantifies the tradeoff between the loss in expressivity and the gain in the regularity of the learned estimator, depending on the chosen symmetry group. To illustrate our approach, we conduct extensive experiments on image inpainting, traffic flow prediction, and human pose estimation with different choices of symmetries. We show theoretically and empirically that the best generalization performance can be achieved by choosing a suitably larger group than the graph automorphism group, but smaller than the full permutation group.
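    The passive/active distinction can be checked numerically with a minimal one-layer message-passing sketch (a hypothetical layer for illustration, not the paper's architecture): permuting both the adjacency matrix and the node features permutes the output (passive symmetry), whereas permuting the features alone on a fixed graph generally does not, unless the permutation is a graph automorphism.

```python
import numpy as np

rng = np.random.default_rng(2)

def gnn_layer(A, X, W):
    """One message-passing layer: aggregate neighbor features, transform, ReLU."""
    return np.maximum(A @ X @ W, 0.0)

n, d = 5, 3
A = rng.integers(0, 2, (n, n))
A = np.triu(A, 1)
A = A + A.T                              # random undirected graph
X = rng.standard_normal((n, d))          # node features
W = rng.standard_normal((d, d))          # layer weights

# A random node relabeling as a permutation matrix P.
P = np.eye(n)[rng.permutation(n)]

# Passive symmetry: permuting BOTH graph and signal permutes the output.
print(np.allclose(gnn_layer(P @ A @ P.T, P @ X, W),
                  P @ gnn_layer(A, X, W)))           # True, always

# Active symmetry on a fixed graph: permuting only the signal commutes
# with the layer just when P is an automorphism of A (usually False here).
print(np.allclose(gnn_layer(A, P @ X, W),
                  P @ gnn_layer(A, X, W)))
```

    The first identity holds for any permutation and any such layer; the second is exactly the automorphism condition P A Pᵀ = A that the paper relaxes via graph coarsening.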